2. Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality ...
3. ANLIzing the Adversarial Natural Language Inference Dataset
   In: Proceedings of the Society for Computation in Linguistics (2022)
4. Investigating Failures of Automatic Translation in the Case of Unambiguous Gender ...
7. Generalising to German Plural Noun Classes, from the Perspective of a Recurrent Neural Network ...
8. On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
   In: Transactions of the Association for Computational Linguistics, 9 (2021)
9. UnNatural Language Inference
   Abstract: Recent investigations into the inner workings of state-of-the-art large-scale pre-trained Transformer-based Natural Language Understanding (NLU) models indicate that they appear to understand human-like syntax, at least to some extent. We provide novel evidence that complicates this claim: we find that state-of-the-art Natural Language Inference (NLI) models assign the same labels to permuted examples as they do to the originals, i.e. they are invariant to random word-order permutations. This behavior notably differs from that of humans; we struggle to understand the meaning of ungrammatical sentences. To measure the severity of this issue, we propose a suite of metrics and investigate which properties of particular permutations lead models to be word-order invariant. For example, in the MNLI dataset we find that almost all (98.7%) examples contain at least one permutation which elicits the gold label. Models are even able to assign gold labels to ...
   URL: https://www.aclanthology.org/2021.acl-long.569 ; https://underline.io/lecture/25789-unnatural-language-inference ; https://dx.doi.org/10.48448/dv9d-6k56
10. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little ...
12. SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection ...
19. Measuring the Similarity of Grammatical Gender Systems by Comparing Partitions
    In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)
20. Pareto Probing: Trading Off Accuracy for Complexity
    In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (2020)